1.
Computers, Materials and Continua; 70(3):4373-4391, 2022.
Article in English | Scopus | ID: covidwho-1481332

ABSTRACT

Coronavirus disease (COVID-19) was first identified in Wuhan, China, and was subsequently declared a global pandemic. The World Health Organization (WHO) has reported a death rate of approximately 3.4% for COVID-19. Chest X-Ray (CXR) and Computerized Tomography (CT) screening of infected persons are essential diagnostic tools. Among the several ways to identify positive COVID-19 cases, one of the fundamental approaches is radiology imaging through CXR or CT images. A comparison of CT and CXR scans revealed that CT scans are more effective in the diagnosis process due to their higher quality. Hence, automated classification techniques are required to facilitate the diagnosis process. Deep Learning (DL) is an effective tool for the detection and classification of this type of medical image, since deep Convolutional Neural Networks (CNNs) can learn and extract essential features from different medical image datasets. In this paper, a CNN architecture for automated COVID-19 detection from CXR and CT images is proposed. The architecture is built from scratch, the COVID-19 image datasets are fed to it directly for training, and its performance is investigated on both the CT and CXR datasets. Three activation functions (Tanh, Sigmoid, and ReLU) are compared at a constant learning rate with different batch sizes, and different optimizers are likewise studied with different batch sizes at a constant learning rate. Finally, different combinations of activation functions and optimizers are compared, and the optimal configuration is determined. The main objective is thus to improve the detection accuracy of COVID-19 from CXR and CT images by employing CNNs to classify medical COVID-19 images at an early stage. The proposed model achieves a classification accuracy of 91.67% on the CXR dataset and 100% on the CT dataset, with training times of 58 min and 46 min, respectively. The best results are obtained using the ReLU activation function combined with the SGDM optimizer at a learning rate of 10^-5 and a minibatch size of 16. © 2022 Tech Science Press. All rights reserved.
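
The sketch below is a minimal, hypothetical PyTorch illustration of the kind of experiment this abstract describes, not the authors' code: a small CNN whose activation function and optimizer can be swapped for comparison. Only the best-reported configuration (ReLU with SGD-with-momentum, learning rate 10^-5, minibatch size 16) is taken from the abstract; the layer sizes, image dimensions, and momentum value are assumptions.

# Hypothetical sketch (not the paper's code): a from-scratch CNN with a
# swappable activation function, trained with SGDM at lr = 1e-5, batch 16.
import torch
import torch.nn as nn

def make_cnn(activation: nn.Module, num_classes: int = 2) -> nn.Sequential:
    # Three conv blocks plus a linear classifier; the channel counts and
    # depth are illustrative assumptions, not details from the paper.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), activation, nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), activation, nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), activation, nn.MaxPool2d(2),
        nn.Flatten(),
        nn.LazyLinear(num_classes),  # infers the flattened size on first call
    )

# Best-performing configuration reported in the abstract:
# ReLU activation + SGD with momentum (SGDM), lr = 1e-5, minibatch = 16.
model = make_cnn(nn.ReLU())
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 16 grayscale images;
# swapping nn.Tanh() or nn.Sigmoid() into make_cnn reproduces the other arms
# of the comparison.
images = torch.randn(16, 1, 128, 128)
labels = torch.randint(0, 2, (16,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()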

2.
Computers, Materials and Continua; 69(2):2295-2312, 2021.
Article in English | Scopus | ID: covidwho-1329279

ABSTRACT

Lightweight deep convolutional neural networks (CNNs) offer a good solution for fast and accurate image-guided diagnosis of COVID-19 patients. Recently, the advantages of portable Ultrasound (US) imaging, such as its simplicity and safe procedures, have attracted many radiologists to scanning suspected COVID-19 cases. In this paper, a new framework of lightweight deep learning classifiers, named COVID-LWNet, is proposed to identify COVID-19 and pneumonia abnormalities in US images. Compared to traditional deep learning models, lightweight CNNs have shown significant performance in real-time vision applications on mobile devices with limited hardware resources. Four main lightweight deep learning models, namely MobileNets, ShuffleNets, MENet, and MnasNet, are employed to identify the health status of lungs from US images. The public POCUS image dataset was used to validate the proposed COVID-LWNet framework. Three classes were investigated in this study: infectious COVID-19, bacterial pneumonia, and healthy lung. The results showed that the proposed MnasNet classifier achieved the best accuracy score and shortest training time, of 99.0% and 647.0 s, respectively. This paper demonstrates the feasibility of using the proposed COVID-LWNet framework as a new mobile-based radiological tool for the clinical diagnosis of COVID-19 and other lung diseases. © 2021 Tech Science Press. All rights reserved.
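
The sketch below is a minimal, hypothetical illustration of the approach this abstract describes, not the COVID-LWNet code: adapting one lightweight backbone (MobileNetV2, as shipped in torchvision) to the three classes named in the paper. The optimizer, learning rate, input size, and batch size are assumptions; only the class set and the lightweight-backbone idea come from the abstract.

# Hypothetical sketch (not the COVID-LWNet code): a lightweight backbone
# with a three-way head for lung ultrasound classification.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # COVID-19, bacterial pneumonia, healthy lung

# weights=None keeps the example self-contained; ImageNet-pretrained weights
# could be loaded instead for transfer learning.
model = models.mobilenet_v2(weights=None)
# Replace the 1000-way ImageNet classifier head with a three-way head.
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed settings
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of US frames (3x224x224).
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()

The same head-replacement pattern applies to the other backbones the paper compares (ShuffleNet, MENet, MnasNet), differing only in how each model exposes its final classifier layer.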
